Search Results

  • Using Unit of Work design pattern / NHibernate Sessions in an MVVM WPF

    - by Echiban
    I think I am stuck in analysis paralysis. Please help! I currently have a project that:
    - uses NHibernate on SQLite
    - implements the Repository and Unit of Work pattern: http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/04/10/nhibernate-and-the-unit-of-work-pattern.aspx
    - follows an MVVM strategy in a WPF app
    My Unit of Work implementation supports one NHibernate session at a time. I thought at the time that this made sense; it hides the inner workings of the NHibernate session from the ViewModel. Now, according to Oren Eini (Ayende) in http://msdn.microsoft.com/en-us/magazine/ee819139.aspx, NHibernate sessions should be created and disposed when the view associated with the presenter / ViewModel is disposed. He explains why you don't want one session per Windows app, nor a session created and disposed per transaction. This unfortunately poses a problem because my UI can easily have 10+ views/ViewModels present in the app. He is presenting an MVP strategy, but does his advice translate to MVVM? Does this mean that I should scrap the Unit of Work and have each ViewModel create NHibernate sessions directly? Should a WPF app only have one working session at a time? If so, when should I create and dispose an NHibernate session? And I still haven't considered how NHibernate stateless sessions fit into all this! My brain is going to explode. Please help!
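    By way of illustration, a minimal sketch of the session-per-ViewModel idea Ayende describes, assuming an NHibernate ISessionFactory can be handed to each ViewModel (the SessionViewModel base class below is invented for the example, not part of the original project):

        using System;
        using NHibernate;

        // Hypothetical base class: each ViewModel owns one ISession for its lifetime.
        public abstract class SessionViewModel : IDisposable
        {
            protected ISession Session { get; private set; }

            protected SessionViewModel(ISessionFactory sessionFactory)
            {
                // One session per ViewModel: opened when the ViewModel is created...
                Session = sessionFactory.OpenSession();
            }

            public void Dispose()
            {
                // ...and disposed when the associated view closes and the ViewModel is released.
                Session.Dispose();
            }
        }

    Each ViewModel then owns its own unit of work, so ten open views mean ten small, independent sessions rather than one long-lived shared one.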

  • Git subtree not properly using .gitignore when doing a partial clone

    - by D W
    I am a graduate student with many scripts, bibliography data in BibTeX, a thesis draft in LaTeX, presentations in OpenOffice, posters in Scribus, and figures and result data. I would like to put everything in one project under version control. Then, when I need to work on a portion such as the bibliography data, I would like to check that subdirectory out, modify it as necessary, and merge it back. I would like the ability to check out one version to my home computer and a different one to my work computer, make changes to each independently, and eventually merge them back. I would also like to be able to check out a piece of code from this big project and import it with versioning into a separate project. If I make changes I'd like to be able to merge them back to the original project. Based on my understanding, git subtree can do this: http://github.com/apenwarr/git-subtree There is an example along the lines of what I'm trying to do at http://psionides.jogger.pl/2010/02/04/sharing-code-between-projects-with-git-subtree/ Say the trunk of my project contained the directories (bib bin cfg data fig src todo). When I use

        git subtree split -P bib -b export
        git checkout export

    I get the bib directory, plus all the files that should have been ignored or considered binary based on .gitignore, such as the src directory and everything in it that ends in a tilde, or the ./data directory.

        dwickrama@DWwork:~/research/trunk$ ls * -r
        biblography.bib  JabRef
        src:
        script1.sh~  README~  script2.sh~  script3.sh~  script4.R~  script5.awk~  script5.py~
        cfg:
        cfgFile1.ini~  cfgFile2.ini~  cfgFile3.ini~
        bin:
        bigBinaryPackage1  bigBinaryPackage2
        dwickrama@DWwork:~/research/trunk$

    My .gitignore file is as follows:

        *.doc diff=word
        *.tex diff=tex
        *.bib diff=bibtex
        *.py diff=python
        *.eps binary
        *.jpg binary
        *.png binary
        ./bin/* binary
        *~

    How do I prevent this?

  • Using git subtree to clone a subdirectory of a project with versioning history then merge it back af

    - by D W
    I am a graduate student with many scripts, bibliography data in BibTeX, a thesis draft in LaTeX, presentations in OpenOffice, posters in Scribus, and figures and result data. I would like to put everything in one project under version control. Then, when I need to work on a portion such as the bibliography data, I would like to check that subdirectory out, modify it as necessary, and merge it back. I would like the ability to check out one version to my home computer and a different one to my work computer, make changes to each independently, and eventually merge them back. I would also like to be able to check out a piece of code from this big project and import it with versioning into a separate project. If I make changes I'd like to be able to merge them back to the original project. Based on my understanding, git subtree can do this: http://github.com/apenwarr/git-subtree There is an example along the lines of what I'm trying to do at http://psionides.jogger.pl/2010/02/04/sharing-code-between-projects-with-git-subtree/ This code is from that site:

        git clone git://git2.kernel.org/pub/scm/git/git.git
        newtree=$(git subtree split --prefix=gitweb --annotate='(split) ' \
          0a8f4f0^.. --onto=1130ef3 --rejoin)
        git branch latest_gitweb $newtree
        gitk latest_gitweb

    Say the trunk of my project contained the directories (bib bin cfg data fig src todo). How would I use git-subtree to split off the bib (bibliography) directory with versioning? When I use

        git subtree split --prefix=bib

    I get 884842f6f4e9896e2e4e9402ee0ef762cd617257 as output, but I don't know where to go from there.

  • R and version control for the solo data analyst

    - by Jeromy Anglim
    Many data analysts that I respect use version control. For example: http://github.com/hadley/ See the comments on http://permut.wordpress.com/2010/04/21/revision-control-statistics-bleg/ However, I'm evaluating whether adopting a version control system such as git would be worthwhile. A brief overview: I'm a social scientist who uses R to analyse data for research publications. I don't currently produce R packages. My R code for a project typically includes a few thousand lines of code for data input, cleaning, manipulation, analyses, and output generation. Publications are typically written using LaTeX. With regard to version control, there are many benefits I have read about, yet they seem less relevant to the solo data analyst:
    - Backup: I have a backup system already in place.
    - Forking and rewinding: I've never felt the need to do this, but I can see how it could be useful (e.g., you are preparing multiple journal articles based on the same dataset; you are preparing a report that is updated monthly, etc.).
    - Collaboration: Most of the time I am analysing data myself, so I wouldn't get the collaboration benefits of version control.
    There are also several potential costs involved with adopting version control:
    - Time to evaluate and learn a version control system.
    - A possible increase in complexity over my current file management system.
    However, I still have the feeling that I'm missing something. General guides on version control seem to be addressed more towards computer scientists than data analysts. Thus, specifically in relation to data analysts in circumstances similar to those listed above:
    - Is version control worth the effort?
    - What are the main pros and cons of adopting version control?
    - What is a good strategy for getting started with version control for data analysis with R (e.g., examples, workflow ideas, software, links to guides)?

  • Android: ScrollView in flipper

    - by Manu
    I have a flipper:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout android:id="@+id/ParentLayout"
            xmlns:android="http://schemas.android.com/apk/res/android"
            style="@style/MainLayout" >
            <LinearLayout android:id="@+id/FlipperLayout" style="@style/FlipperLayout">
                <ViewFlipper android:id="@+id/viewflipper" style="@style/ViewFlipper">
                    <!--adding views to ViewFlipper-->
                    <include layout="@layout/home1" android:layout_gravity="center_horizontal" />
                    <include layout="@layout/home2" android:layout_gravity="center_horizontal" />
                </ViewFlipper>
            </LinearLayout>
        </LinearLayout>

    The first layout, home1, consists of a scroll view. What should I do to distinguish between the flipping gesture and the scrolling? Presently:
    - if I remove the scroll view, I can swipe across
    - if I add the scroll view, I can only scroll
    I saw a suggestion that I should override onInterceptTouchEvent(MotionEvent), but I do not know how to do this. My code, at this moment, looks like this:

        public class HomeActivity extends Activity {
            -- declares
            @Override
            public void onCreate(Bundle savedInstanceState) {
                -- declares & preliminary actions
                LinearLayout layout = (LinearLayout) findViewById(R.id.ParentLayout);
                layout.setOnTouchListener(new OnTouchListener() {
                    public boolean onTouch(View v, MotionEvent event) {
                        if (gestureDetector.onTouchEvent(event)) {
                            return true;
                        }
                        return false;
                    }});

            @Override
            public boolean onTouchEvent(MotionEvent event) {
                gestureDetector.onTouchEvent(event);
                return true;
            }

            class MyGestureDetector extends SimpleOnGestureListener {
                @Override
                public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) {
                    // http://www.codeshogun.com/blog/2009/04/16/how-to-implement-swipe-action-in-android/
                }
            }
        }

    Can anybody please guide me in the right direction? Thank you.

  • calling CreateFile, specifying FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE.

    - by alexander-daniels
    Before I describe my problem, here is a description of the program I'm writing: it is a C++ application whose purpose is to create a file in RAM. I read that if I specify FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE when creating the file, it will be kept directly in RAM. One of the blogs that talks about this is: http://blogs.msdn.com/larryosterman/archive/2004/04/19/116084.aspx I have built a mini-program, but it does not achieve the goal. Instead, it creates a file on the hard drive in the directory I specify. Here's my program:

        void main () {
            LPCWSTR str = L"c:\temp.txt";
            HANDLE fh = CreateFile(str, GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                                   FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE, NULL);
            if (fh == INVALID_HANDLE_VALUE) {
                printf ("Could not open TWO.TXT");
                return;
            }
            DWORD dwBytesWritten;
            for (long i=0; i<20000000; i++) {
                WriteFile(fh, "This is a test\r\n", 16, &dwBytesWritten, NULL);
            }
            return;
        }

    I think the problem is in the CreateFile call, but I can't fix it. Please help me.

  • How to stream partial content with ASP.NET MVC FileStreamResult

    - by o_o
    We're using a FileStreamResult to provide video data to a Silverlight MediaElement based video player:

        public ActionResult Preview(Guid id)
        {
            return new FileStreamResult(
                Services.AssetStore.GetStream(id, ContentType.Preview),
                "application/octet-stream");
        }

    Unfortunately, the Silverlight video player downloads the entire video file before it starts playing. This behavior is expected as our Preview action does not support downloading partial content. (Side note: if the file is hosted in an IIS virtual directory we can start playback at any location in the video while it is still downloading. However, for security and auditing reasons we can't provide a direct download link, so this is not an option.) How can we improve the controller action to support partial HTTP content? I assume we first have to inform the client that we support it (adding an "Accept-Ranges: bytes" header to a HEAD request), then we have to evaluate the HTTP "Range" header and stream the requested file range with a response code of 206. Will that work with ASP.NET MVC hosted on IIS6? Is there already some code available? Also see:
    http://en.wikipedia.org/wiki/List_of_HTTP_headers
    http://blogs.msdn.com/anilkumargupta/archive/2009/04/29/downloadprogress-downloadprogressoffset-and-bufferprogress-of-the-mediaelement.aspx
    http://benramsey.com/archives/206-partial-content-and-range-requests/
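    For what it's worth, a rough sketch of the Range handling described above, written against the same Services.AssetStore inside the controller (the action name is invented); it assumes the stream is seekable with a known length and only handles a single "bytes=start-end" range:

        public void PreviewPartial(Guid id)
        {
            using (var stream = Services.AssetStore.GetStream(id, ContentType.Preview))
            {
                long start = 0, end = stream.Length - 1;
                string rangeHeader = Request.Headers["Range"];   // e.g. "bytes=1000-"

                Response.AddHeader("Accept-Ranges", "bytes");
                if (!string.IsNullOrEmpty(rangeHeader) && rangeHeader.StartsWith("bytes="))
                {
                    // Simplified: expects "bytes=start-" or "bytes=start-end" only.
                    string[] parts = rangeHeader.Substring(6).Split('-');
                    start = long.Parse(parts[0]);
                    if (parts.Length > 1 && parts[1].Length > 0)
                        end = long.Parse(parts[1]);

                    Response.StatusCode = 206;
                    Response.AddHeader("Content-Range",
                        string.Format("bytes {0}-{1}/{2}", start, end, stream.Length));
                }

                Response.ContentType = "application/octet-stream";
                Response.AddHeader("Content-Length", (end - start + 1).ToString());

                stream.Seek(start, System.IO.SeekOrigin.Begin);
                var buffer = new byte[64 * 1024];
                long remaining = end - start + 1;
                while (remaining > 0)
                {
                    int read = stream.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining));
                    if (read == 0) break;
                    Response.OutputStream.Write(buffer, 0, read);
                    remaining -= read;
                }
            }
        }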

  • WPF Memory Leak on XP (CMilChannel, HWND)

    - by vanja.
    My WPF application leaks memory at about 4kb/s. The memory usage in Task Manager climbs constantly until the application crashes with an "Out of Memory" exception. By doing my own research I have found that the problem is discussed here: http://stackoverflow.com/questions/801589/track-down-memory-leak-in-wpf and in #8 here: http://blogs.msdn.com/jgoldb/archive/2008/02/04/finding-memory-leaks-in-wpf-based-applications.aspx The problem described is: "This is a leak in WPF present in versions of the framework up to and including .NET 3.5 SP1. This occurs because of the way WPF selects which HWND to use to send messages from the render thread to the UI thread. This sample destroys the first HWND created and starts an animation in a new Window. This causes messages sent from the render thread to pile up without being processed, effectively leaking memory." The solution offered is: "The workaround is to create a new HwndSource first thing in your App class constructor. This MUST be created before any other HWND is created by WPF. Simply by creating this HwndSource, WPF will use this to send messages from the render thread to the UI thread. This assures all messages will be processed, and that none will leak." But I don't understand the solution! I have a subclass of Application that I am using, and I have tried creating a window in that constructor, but that has not solved the problem. Following the instructions literally, it looks like I just need to add this to my Application constructor:

        new HwndSource(new HwndSourceParameters("MyApplication"));
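    Taken literally, the workaround might look like the sketch below; the field is only there as a precaution so the HwndSource is not collected (the quoted advice says nothing more than to create it before any other HWND):

        using System.Windows;
        using System.Windows.Interop;

        public partial class App : Application
        {
            // Kept in a field so the HwndSource lives as long as the application.
            private HwndSource _messageSource;

            public App()
            {
                // Must run before WPF creates any other HWND (i.e. before any Window is constructed).
                _messageSource = new HwndSource(new HwndSourceParameters("MyApplication"));
            }
        }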

  • Remove file from git repository (history)

    - by Devenv
    (solved, see bottom of the question body) I have been looking for this for a long time now; what I have so far is:
    http://dound.com/2009/04/git-forever-remove-files-or-folders-from-history/ and http://progit.org/book/ch9-7.html
    Pretty much the same method, but both of them leave objects in pack files... Stuck. What I tried:

        git filter-branch --index-filter 'git rm --cached --ignore-unmatch file_name'
        rm -Rf .git/refs/original
        rm -Rf .git/logs/
        git gc

    I still have files in the pack, and this is how I know it:

        git verify-pack -v .git/objects/pack/pack-3f8c0...bb.idx | sort -k 3 -n | tail -3

    And this:

        git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch file_name" HEAD
        rm -rf .git/refs/original/ && git reflog expire --all && git gc --aggressive --prune

    The same... I tried the git clone trick; it removed some of the files (~3000 of them) but the largest files are still there... I have some large legacy files in the repository, ~200M, and I really don't want them there... And I don't want to reset the repository to 0 :(
    SOLUTION: This is the shortest way to get rid of the files:
    1. Check .git/packed-refs - my problem was that I had a refs/remotes/origin/master line there for a remote repository; delete it, otherwise git won't remove those files.
    2. (optional) git verify-pack -v .git/objects/pack/#{pack-name}.idx | sort -k 3 -n | tail -5 - to check for the largest files.
    3. (optional) git rev-list --objects --all | grep a0d770a97ff0fac0be1d777b32cc67fe69eb9a98 - to check what files those are.
    4. git filter-branch --index-filter 'git rm --cached --ignore-unmatch file_names' - to remove the file from all revisions.
    5. rm -rf .git/refs/original/ - to remove git's backup.
    6. git reflog expire --all --expire='0 days' - to expire all the loose objects.
    7. (optional) git fsck --full --unreachable - to check if there are any loose objects.
    8. git repack -A -d - to repack the pack.
    9. git prune - to finally remove those objects.

  • SQL - Rank() on a table

    - by Abhi
        create table v (mydate,value) as
        select to_date('20/03/2010 00','dd/mm/yyyy HH24'),98 from dual union all
        select to_date('20/03/2010 01','dd/mm/yyyy HH24'),124 from dual union all
        select to_date('20/03/2010 02','dd/mm/yyyy HH24'),140 from dual union all
        select to_date('20/03/2010 03','dd/mm/yyyy HH24'),138 from dual union all
        select to_date('20/03/2010 04','dd/mm/yyyy HH24'),416 from dual union all
        select to_date('20/03/2010 05','dd/mm/yyyy HH24'),196 from dual union all
        select to_date('20/03/2010 06','dd/mm/yyyy HH24'),246 from dual union all
        select to_date('20/03/2010 07','dd/mm/yyyy HH24'),176 from dual union all
        select to_date('20/03/2010 08','dd/mm/yyyy HH24'),124 from dual union all
        select to_date('20/03/2010 09','dd/mm/yyyy HH24'),128 from dual union all
        select to_date('20/03/2010 10','dd/mm/yyyy HH24'),32010 from dual union all
        select to_date('20/03/2010 11','dd/mm/yyyy HH24'),384 from dual union all
        select to_date('20/03/2010 12','dd/mm/yyyy HH24'),368 from dual union all
        select to_date('20/03/2010 13','dd/mm/yyyy HH24'),392 from dual union all
        select to_date('20/03/2010 14','dd/mm/yyyy HH24'),374 from dual union all
        select to_date('20/03/2010 15','dd/mm/yyyy HH24'),350 from dual union all
        select to_date('20/03/2010 16','dd/mm/yyyy HH24'),248 from dual union all
        select to_date('20/03/2010 17','dd/mm/yyyy HH24'),396 from dual union all
        select to_date('20/03/2010 18','dd/mm/yyyy HH24'),388 from dual union all
        select to_date('20/03/2010 19','dd/mm/yyyy HH24'),360 from dual union all
        select to_date('20/03/2010 20','dd/mm/yyyy HH24'),194 from dual union all
        select to_date('20/03/2010 21','dd/mm/yyyy HH24'),234 from dual union all
        select to_date('20/03/2010 22','dd/mm/yyyy HH24'),328 from dual union all
        select to_date('20/03/2010 23','dd/mm/yyyy HH24'),216 from dual

    From this table, how do I rank() over value, partitioning by each hour of the day, and select only the first-ranked result?

  • Creating custom IP-STS for sharepoint foundation 2010 without ADFS

    - by user252229
    I plan to create a very simple custom IP-STS for SharePoint Foundation 2010 without an ADFS server, so anyone can integrate Windows Live ID into SharePoint Foundation 2010 without ADFS. I can't use an ADFS server because it cannot be installed on Windows Web Server 2008 (Web Edition). I also found many articles that use an LDAP provider, but that does not exist in SharePoint Foundation either (it requires the SharePoint Server edition). After much searching I found the following articles and have all the techniques working except for one problem.
    1) Creating a custom claims provider: blogs.technet.com/b/speschka/archive/2010/03/13/writing-a-custom-claims-provider-for-sharepoint-2010-part-1.aspx
    2) Creating a custom STS provider: http://blogs.msdn.com/b/chunliu/archive/2010/04/02/how-to-make-use-of-a-custom-ip-sts-with-sharepoint-2010-part-1.aspx
    Only one step remains: I get the following error after entering a username on the STS site and being redirected to localhost/_trust/default.aspx (I leave EncryptingCertificateName empty):

        Operation is not valid due to the current state of the object

    I expect to get an access denied error instead of that error.
    1. Is it possible anyway?
    2. Can anyone help me find a working article on creating a custom IP-STS without an ADFS server?
    Any idea will help me. Thanks

  • Top Container Background Problem

    - by Norbert
    Here's a screenshot: http://dl.getdropbox.com/u/118004/Screen%20shot%202010-04-13%20at%202.50.49%20PM.png The red bar on the left is the background I set for the #personal div, and I would like it to align to the top of the container, vertically. The problem is that I have a background for the #container-top div on top of the #container div with absolute positioning. Is there any way to move the #personal div up so there would be no space left?
    HTML

        <div id="container">
            <div id="container-top"></div>
            <div id="personal">
                <h1>Jonathan Doe</h1>
                <p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliqua erat volutpat.</p>
            </div> <!-- end #personal -->
        </div> <!-- end #container -->

    CSS

        #container {
            background: url(images/bg-mid.png) repeat-y top center;
            width: 835px;
            margin: 40px auto;
            position: relative;
        }
        #container-top {
            background: url(images/bg-top.png) no-repeat top center;
            position: absolute;
            height: 12px;
            width: 835px;
            top: -12px;
        }
        #container-bottom {
            background: url(images/bg-bottom.png) no-repeat top center;
            position: absolute;
            height: 27px;
            width: 835px;
            bottom: -27px;
        }
        #personal {
            background: url(images/personal-info.png) no-repeat 0px left;
        }

  • Is there a more easy way to create a WCF/OData Data Service Query Provider?

    - by routeNpingme
    I have a simple little data model resembling the following:

        InventoryContext {
            IEnumerable<Computer> GetComputers()
            IEnumerable<Printer> GetPrinters()
        }

        Computer {
            public string ComputerName { get; set; }
            public string Location { get; set; }
        }

        Printer {
            public string PrinterName { get; set; }
            public string Location { get; set; }
        }

    The results come from a non-SQL source, so this data does not come from Entity Framework connected up to a database. Now I want to expose the data through a WCF OData service. The only way I've found to do that thus far is to create my own Data Service Query Provider, per this blog tutorial: http://blogs.msdn.com/alexj/archive/2010/01/04/creating-a-data-service-provider-part-1-intro.aspx ...which is great, but seems like a pretty involved undertaking. The code for the provider would be four times longer than my whole data model, to generate all of the resource sets and property definitions. Is there something like a generic provider in between Entity Framework and writing your own data source from scratch? Maybe some way to build an object data source or something, so that the magical WCF unicorns can pick up my data and ride off into the sunset without having to explicitly code the provider?
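    Not from the original thread, but one commonly used middle ground is the WCF Data Services reflection provider, where a context class with IQueryable<T> properties defines the entity sets. A sketch under that assumption; the [DataServiceKey] choices and class names are illustrative:

        using System.Collections.Generic;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;

        [DataServiceKey("ComputerName")]   // illustrative key choice
        public class Computer
        {
            public string ComputerName { get; set; }
            public string Location { get; set; }
        }

        [DataServiceKey("PrinterName")]    // illustrative key choice
        public class Printer
        {
            public string PrinterName { get; set; }
            public string Location { get; set; }
        }

        // Each IQueryable property becomes a read-only entity set.
        public class InventoryData
        {
            public IQueryable<Computer> Computers
            {
                get { return GetComputers().AsQueryable(); }
            }

            public IQueryable<Printer> Printers
            {
                get { return GetPrinters().AsQueryable(); }
            }

            // The existing non-SQL data access from the question would live here.
            private IEnumerable<Computer> GetComputers() { return new Computer[0]; }
            private IEnumerable<Printer> GetPrinters() { return new Printer[0]; }
        }

        public class InventoryDataService : DataService<InventoryData>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            }
        }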

  • forward/strong enum in VS2010

    - by Noah Roberts
    At http://blogs.msdn.com/vcblog/archive/2010/04/06/c-0x-core-language-features-in-vc10-the-table.aspx there is a table showing C++0x features that are implemented in VS2010 RC. Among them are listed forward-declared enums and strongly typed enums, but they are listed as "partial". The main text of the article says that this means they are either incomplete or implemented in some non-standard way. So I've got VS2010 RC and am playing around with the C++0x features. I can't figure these ones out and can't find any documentation on these two features. Not even the simplest attempts compile.

        enum class E { test };
        int main() {}

    fails with:

        1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C2332: 'enum' : missing tag name
        1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C2236: unexpected 'class' 'E'. Did you forget a ';'?
        1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C3381: 'E' : assembly access specifiers are only available in code compiled with a /clr option
        1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C2143: syntax error : missing ';' before '}'
        1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

        int main() {
            enum E : short;
        }

    fails with:

        1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(513): warning C4480: nonstandard extension used: specifying underlying type for enum 'main::E'
        1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(513): error C2059: syntax error : ';'
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    So it seems it must be some totally non-standard implementation that has allowed them to justify calling this feature "partially" done. How would I rewrite that code to access the forward declaration and strong typing features?

  • jquery: prepopulating autocomplete fields

    - by David Tildon
    I'm using the tokenizing autocomplete plugin for jQuery (http://loopj.com/2009/04/25/jquery-plugin-tokenizing-autocomplete-text-entry). I mostly use Ruby, and I'm really unfamiliar with JavaScript, though. My basic setup looks like this, and works fine for a new, blank form:

        $(document).ready(function () {
            $("#tag_ids_field").tokenInput("/tags", {
                queryParam: "search"
            });
        });

    The problem comes when I try to prepopulate it, as for an edit page. I'm trying to do something like this (where the "#tag_ids_field" text box contains the JSON when the page is loaded - that way is just cleaner on the application side of things):

        $(document).ready(function () {
            var tags = $("#tag_ids_field").html();
            $("#tag_ids_field").tokenInput("/tags", {
                queryParam: "search",
                prePopulate: tags
            });
        });

    However, when the page loads I see that it's just filled with hundreds of entries that read 'undefined'. I get this even if I take the JSON output that Rails provides and stick it right in the .js file:

        $(document).ready(function () {
            $("#tag_ids_field").tokenInput("/tags", {
                queryParam: "search",
                prePopulate: "[{\"id\":\"44\",\"name\":\"omnis sit impedit et numquam voluptas enim\"},{\"id\":\"515\",\"name\":\"deserunt odit id doloremque reiciendis aliquid qui vel\"},{\"id\":\"943\",\"name\":\"exercitationem numquam possimus quasi iste nisi illum\"}]"
            });
        });

    That's obviously not a solution; I just tried it out of frustration, and I get the same behavior. My two questions: One, why are my text boxes being filled with "undefined" tags when I try to prepopulate, and how can I get them to prepopulate successfully? Two, I'm planning on having many autocomplete fields like this on the same page (for when several records are edited at once - they all query the same place). How can I make each autocomplete field take its prepopulated values from its own textbox? Something like (applying these settings to all input boxes with a certain class, not just the one of a particular id):

        $(document).ready(function () {
            $(".tag_ids_field").tokenInput("/tags", {
                queryParam: "search",
                prePopulate: (the contents of that particular ".tag_ids_field" input box)
            });
        });

  • iPhone multitouch - Some touches dispatch touchesBegan: but not touchesMoved:

    - by zkarcher
    I'm developing a multitouch application. One touch is expected to move, and I need to track its position. For all other touches, I need to track their beginnings and endings, but their movement is less critical. Sometimes, when 3 or more touches are active, my UIView does not receive touchesMoved: events for the moving touch. This problem is intermittent, and can always be reproduced after a few attempts:
    - Touch the screen with 2 fingers.
    - Touch the screen with another finger, and move this finger around.
    The moving finger always dispatches touchesBegan: and touchesEnded:, but sometimes does not dispatch any touchesMoved: events. Whenever the moving touch does not dispatch touchesMoved: events, I can force it to dispatch touchesMoved: if I move one of the other touches. This seems to "force" every touch to recheck its position, and I successfully receive a touchesMoved: event. However, this is clumsy. This bug is reproducible on both the iPhone 2G and 3GS models. My question is: how do I ensure that my moving touch dispatches touchesMoved: events? Does anyone have any experience with this issue? I've spent a few fruitless days searching the web for answers. I found a post describing how to sync touch events with the VBL: http://www.71squared.com/2009/04/maingameloop-changes/. However, this has not solved the problem. I really don't know how to proceed. Any help is appreciated!

  • High-concurrency counters without sharding

    - by dound
    This question concerns two implementations of counters which are intended to scale without sharding (with a tradeoff that they might under-count in some situations):
    1. http://appengine-cookbook.appspot.com/recipe/high-concurrency-counters-without-sharding/ (the code in the comments)
    2. http://blog.notdot.net/2010/04/High-concurrency-counters-without-sharding
    My questions: With respect to #1: running memcache.decr() in a deferred, transactional task seems like overkill. If memcache.decr() is done outside the transaction, I think the worst case is that the transaction fails and we miss counting whatever we decremented. Am I overlooking some other problem that could occur by doing this? What are the significant tradeoffs between the two implementations? Here are the tradeoffs I see:
    - #2 does not require datastore transactions.
    - To get the counter's value, #2 requires a datastore fetch, while #1 typically only needs to do a memcache.get() and memcache.add().
    - When incrementing a counter, both call memcache.incr(). Periodically, #2 adds a task to the task queue while #1 transactionally performs a datastore get and put. #1 also always performs memcache.add() (to test whether it is time to persist the counter to the datastore).
    Conclusions (without actually running any performance tests):
    - #1 should typically be faster at retrieving a counter (#1 memcache vs #2 datastore), though #1 has to perform an extra memcache.add() too.
    - However, #2 should be faster when updating counters (#1 datastore get+put vs #2 enqueue a task).
    - On the other hand, with #1 you have to be a bit more careful with the update interval since the task queue quota is almost 100x smaller than either the datastore or memcache APIs.

  • Implementing WS-Security within a WCF proxy

    - by harrisonmeister
    Hi, I have imported an Axis-based WSDL into a VS 2008 project as a service reference. I need to be able to pass security details such as username/password and nonce values to call the Axis-based service. I have looked into doing it with WSE, which I understand the world hates (no issues there). I have very little experience with WCF, but have worked out how to physically call the endpoint now, thanks to SO. I have no idea, however, how to set up the SOAP headers as the schema below shows:

        <S:Envelope xmlns:S="http://www.w3.org/2001/12/soap-envelope"
                    xmlns:ws="http://schemas.xmlsoap.org/ws/2002/04/secext">
          <S:Header>
            <ws:Security>
              <ws:UsernameToken>
                <ws:Username>aarons</ws:Username>
                <ws:Password>snoraa</ws:Password>
              </ws:UsernameToken>
            </wsse:Security>
            •••
          </S:Header>
          •••
        </S:Envelope>

    Any help much appreciated. Thanks, Mark
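    For reference, the standard WCF way to attach a UsernameToken over HTTPS is message credentials on the binding; a sketch follows, where ServiceClient and the address are placeholders for the generated proxy. Note that it emits the newer OASIS WS-Security namespace and no nonce, so it may still not match what an older Axis service expects:

        using System.ServiceModel;

        public static class AxisClientFactory
        {
            // ServiceClient is a placeholder for the proxy generated from the service reference.
            public static ServiceClient Create()
            {
                var binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportWithMessageCredential);
                binding.Security.Message.ClientCredentialType = BasicHttpMessageCredentialType.UserName;

                var client = new ServiceClient(binding,
                    new EndpointAddress("https://example.com/axis/service"));   // placeholder address
                client.ClientCredentials.UserName.UserName = "aarons";
                client.ClientCredentials.UserName.Password = "snoraa";
                return client;
            }
        }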

  • How to use routing in an ASP.NET MVC website to localize in two languages - but keeping existing URLs

    - by Anders Pedersen
    We have a couple of ASP.NET MVC websites just using the standard VS templates' default settings, working as wanted. But now I want to localize these websites (they are now in Dutch and I will add English). I would like to use routing and not resources because:
    1. Languages will differ in content, number of pages, etc.
    2. The content is mostly text.
    I would like the URLs to look something like this: www.domain.com/en/Home/Index, www.domain.nl/nl/Home/Index. But the last one should also work as www.domain.nl/Home/Index, which are the existing URLs. I have implemented Phil Haack's areas ViewEngine from this blog post: http://haacked.com/archive/2008/11/04/areas-in-aspnetmvc.aspx, but only put the English website in the areas and kept the Dutch in the old structure, which is served as Phil's default fallback. The problem here is that I have to duplicate my controllers for both languages. So I tried the method described in this thread: http://stackoverflow.com/questions/1712167/asp-net-mvc-localization-route. It works OK with the /en/ and /nl/ URLs, but not with the old URLs. When using this code in global.asax, the URL without the culture isn't working:

        public static void RegisterRoutes(RouteCollection routes)
        {
            //routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Default", // Route name
                "{culture}/{controller}/{action}/{id}", // URL with parameters
                new { culture = "nl-NL", controller = "Home", action = "Index", id = "" } // Parameter defaults
            );

            routes.MapRoute(
                "DefaultWitoutCulture", // Route name
                "{controller}/{action}/{id}", // URL with parameters
                new { controller = "Home", action = "Index", id = "" } // Parameter defaults
            );
        }

    I'm probably overlooking something simple, but I can't get this to work for me. Or is there a better way of doing this?
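    A common cause of this exact symptom is that "{culture}/{controller}/{action}" also matches /Home/Index (with culture = "Home"), so the culture-less route never runs. One way around it is a constraint on the culture segment; a sketch, assuming a simple regex constraint is acceptable:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            // Only match when the first segment looks like a culture code
            // ("nl", "en", "nl-NL", "en-GB", ...), so /Home/Index falls through.
            routes.MapRoute(
                "DefaultLocalized",
                "{culture}/{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = "" },
                new { culture = "^[a-z]{2}(-[A-Za-z]{2})?$" }
            );

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { culture = "nl-NL", controller = "Home", action = "Index", id = "" }
            );
        }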

  • WCF, IIS6.0 (413) Request Entity Too Large.

    - by Andrew Kalashnikov
    Hello, guys. I've got an annoying problem. I've got a WCF service (basicHttpBinding with Transport security over HTTPS). This service implements a contract which consists of 2 methods:
    1. LoadData
    2. GetData
    GetData works OK! My client receives a package of ~2 MB without problems. All works correctly. But when I try to load data via

        bool LoadData(Stream data);   // signature of the method

    I get (413) Request Entity Too Large. Stack trace:

        Server stack trace:
        ServiceModel.Channels.HttpChannelUtilities.ValidateRequestReplyResponse(HttpWebRequest request, HttpWebResponse response, HttpChannelFactory factory, WebException responseException, ChannelBinding channelBinding)
        System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
        System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
        System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)

    I tried this: http://blogs.msdn.com/jiruss/archive/2007/04/13/http-413-request-entity-too-large-can-t-upload-large-files-using-iis6.aspx, but it doesn't work! My server is 2003 with IIS 6.0. Please help.
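    As a point of comparison, when the 413 comes from WCF itself rather than from IIS, the usual knobs are the binding's size limits. A sketch of those settings applied to a binding built in code; the sizes are arbitrary examples, and the IIS6/SSL side covered by the linked post is a separate setting:

        using System.ServiceModel;

        public static class UploadBindingFactory
        {
            public static BasicHttpBinding Create()
            {
                var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport)
                {
                    // Raise the limits from the 64 KB default; values here are examples.
                    MaxReceivedMessageSize = 64 * 1024 * 1024,
                    MaxBufferSize = 64 * 1024 * 1024,
                    // Streaming avoids buffering the whole upload in memory.
                    TransferMode = TransferMode.Streamed
                };
                binding.ReaderQuotas.MaxArrayLength = 64 * 1024 * 1024;
                return binding;
            }
        }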

  • how to read http response soap headers from web service response in proxy class

    - by Fabricio
    I'm having some problems with one webservice that I'm working with. I generated a proxy class with the wsdl.exe that comes with the .NET Framework, but the webservice returns a header that isn't mapped by the WSDL. I must map the SOAP header because it contains some properties that I have to read and work with. How can I read the SOAP header collection? Example:

        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Header xmlns="http://xml.amadeus.com/ws/2009/01/WBS_Session-2.0.xsd">
            <Session>
              <SessionId>545784545</SessionId>
              <SequenceNumber>1</SequenceNumber>
              <SecurityToken>asd7a87sda89sd45as4d5a4</SecurityToken>
            </Session>
          </soap:Header>
          <soap:Body>
            <TAM_Altea_Seguranca_AutenticarRS xmlns="http://xml.amadeus.com/2009/04/TAM/TAM_Altea_Seguranca_AutenticarRS_2.0">
              <statusDoProcesso>
                <codigoDoStatus>P</codigoDoStatus>
              </statusDoProcesso>
            </TAM_Altea_Seguranca_AutenticarRS>
          </soap:Body>
        </soap:Envelope>

    I need to read the SOAP:HEADER - Session.
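    Since wsdl.exe generates an ASMX-style proxy (SoapHttpClientProtocol), one option is to declare the missing header type yourself and bind it to the generated method with SoapHeaderAttribute. A sketch under that assumption; the proxy and method names are placeholders, and the exact namespace attributes may need adjusting:

        using System.Web.Services.Protocols;
        using System.Xml.Serialization;

        // Shape of the header that comes back from the service.
        [XmlRoot("Session", Namespace = "http://xml.amadeus.com/ws/2009/01/WBS_Session-2.0.xsd")]
        public class Session : SoapHeader
        {
            public string SessionId;
            public string SequenceNumber;
            public string SecurityToken;
        }

        // Then, on the generated proxy class (names here are placeholders):
        //
        //     public Session SessionValue;                          // field added to the proxy
        //
        //     [SoapHeader("SessionValue", Direction = SoapHeaderDirection.Out)]
        //     public ... Autenticar(...)                            // the existing generated method
        //
        // After the call returns, proxy.SessionValue.SecurityToken etc. hold the
        // values from the response header.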

  • Key Tips in WPF

    - by Brad Leach
    Office 2007 and the Ribbon introduced the concept of "Key Tips". In short, every single command in the Ribbon receives a letter which you can press to activate that command. ... The letters are indicated by small "KeyTips" which indicate the letter to press to activate the control. KeyTips are displayed using the Alt key, so using them feels similar to how menu navigation works in Windows. (Source: http://blogs.msdn.com/jensenh/archive/2006/04/12/574930.aspx) An example of the Key Tips can be shown as follows. In this diagram, the user has pressed the ALT key and is awaiting further input.
    - Are there any WPF open-source examples of "Key Tips"?
    - How would you go about implementing something like this feature in a generic way (i.e. not requiring a Ribbon)?
    - How would you implement this using an MVVM pattern (given that ICommand does not support InputBindings)?
    Note: ActiPro have implemented this feature in their implementation of a Ribbon, but they have not released source code.

  • equality on the sender of an event

    - by Berryl
    I have an interface for a UI widget, two of which are attributes of a presenter.

        public IMatrixWidget NonProjectActivityMatrix {
            set {
                // validate the incoming value and set the field
                _nonProjectActivityMatrix = value;
                .... // configure & load non-project activities
            }

        public IMatrixWidget ProjectActivityMatrix {
            set {
                // validate the incoming value and set the field
                _projectActivityMatrix = value;
                .... // configure & load project activities
            }

    The widget has an event that both presenter objects subscribe to, and so there is an event handler in the presenter like so:

        public void OnActivityEntry(object sender, EntryChangedEventArgs e)
        {
            // calculate newTotal here
            ....
            if (ReferenceEquals(sender, _nonProjectActivityMatrix)) {
                _nonProjectActivityMatrix.UpdateTotalHours(feedback.ActivityTotal);
            }
            else if (ReferenceEquals(sender, _projectActivityMatrix)) {
                _projectActivityMatrix.UpdateTotalHours(feedback.ActivityTotal);
            }
            else {
                // ERROR - we should never be here
            }
        }

    The problem is that the ReferenceEquals on the sender fails, even though it is the implemented widget that is the sender - the same implemented widget that was set to the presenter attribute! Can anyone spot what the problem / fix is? Cheers, Berryl
    I didn't know you could edit nicely. Cool. Here is the event-raising code:

        void OnGridViewNumericUpDownEditingControl_ValueChanged(object sender, EventArgs e)
        {
            // omitted to save space
            if (EntryChanged == null) return;
            var args = new EntryChangedEventArgs(activityID, dayID, Convert.ToDouble(amount));
            EntryChanged(this, args);
        }

    Here is the debugger dump of the presenter attribute, sans namespace info:

        ?_nonProjectActivityMatrix
        {WinPresentation.Widgets.MatrixWidgetDgv}
            [WinPresentation.Widgets.MatrixWidgetDgv]: {WinPresentation.Widgets.MatrixWidgetDgv}

    Here is the debugger dump of the sender:

        ?sender
        {WinPresentation.Widgets.MatrixWidgetDgv}
            base {Core.GUI.Widgets.Lookup.MatrixWidgetBase<Core.GUI.Widgets.Lookup.DynamicDisplayDto>}: {WinPresentation.Widgets.MatrixWidgetDgv}
            _configuration: {Domain.Presentation.Timesheet.Matrix.WeeklyMatrixConfiguration}
            _wrappedWidget: {Win.Widgets.DataGridViewDynamicLookupWidget}
            AllowUserToAddRows: true
            ColumnCount: 11
            Count: 4
            EntryChanged: {Method = {Void OnActivityEntry(System.Object, Smack.ConstructionAdmin.Domain.Presentation.Timesheet.Matrix.EntryChangedEventArgs)}}
            SelectedCell: {DataGridViewNumericUpDownCell { ColumnIndex=3, RowIndex=3 }}
            SelectedCellValue: "0.00"
            SelectedColumn: {DataGridViewNumericUpDownColumn { Name=MONDAY, Index=3 }}
            SelectedItem: {'AdministrativeActivity: 130-04', , AdministrativeTime, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00}

    Berryl
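    Not from the original thread, but when ReferenceEquals fails unexpectedly, logging identity hashes inside the handler quickly shows whether the field was reassigned (or a different instance subscribed) after the event was wired up; a sketch, reusing the fields already shown above:

        using System.Diagnostics;
        using System.Runtime.CompilerServices;

        public void OnActivityEntry(object sender, EntryChangedEventArgs e)
        {
            // Identity hashes ignore any Equals/GetHashCode overrides on the widgets.
            Debug.WriteLine(string.Format(
                "sender={0} nonProject={1} project={2}",
                RuntimeHelpers.GetHashCode(sender),
                RuntimeHelpers.GetHashCode(_nonProjectActivityMatrix),
                RuntimeHelpers.GetHashCode(_projectActivityMatrix)));

            // ... existing ReferenceEquals logic ...
        }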

  • Postback not working with ASP.NET Routing (Validation of viewstate MAC failed)

    - by Robert
    Hi. I'm using the ASP.NET 3.5 SP1 System.Web.Routing with classic WebForms, as described in http://chriscavanagh.wordpress.com/2008/04/25/systemwebrouting-with-webforms-sample/ All works fine; I have custom SEO URLs and even the postback works. But there is a case where the postback always fails and I get: "Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster." Here is the scenario to reproduce the error:
    - Create a standard webform mypage.aspx with a button.
    - Create a Route that maps "a/b/{id}" to "~/mypage.aspx".
    - When you execute the site, you can navigate to http://localhost:XXXX/a/b/something and the page works. But when you press the button, you get the error.
    The error doesn't happen when the Route is just "a/{id}". It seems to be related to the number of sub-paths in the URL. If there are at least 2 sub-paths, the viewstate validation fails. You get the error even with EnableViewStateMac="false". Any ideas? Is it a bug? Thanks

  • How to mix Grammar (Rules) & Dictation (Free speech) with SpeechRecognizer in C#

    - by Lee Englestone
    I really like Microsoft's latest speech recognition (and speech synthesis) offerings:
    http://msdn.microsoft.com/en-us/library/ms554855.aspx
    http://estellasays.blogspot.com/2009/04/speech-recognition-in-cnet.html
    However, I feel somewhat limited when using grammars. Don't get me wrong: grammars are great for telling the speech recognition exactly what words / phrases to look out for, but what if I want it to recognise something I've not given it a heads-up about? Or I want to parse a phrase which is half pre-determined command name and half random words? For example:
    - Scenario A - I say "Google [Oil Spill]" and I want it to open Google with search results for the term in brackets, which could be anything.
    - Scenario B - I say "Locate [Manchester]" and I want it to search for Manchester in Google Maps or anything else non pre-determined.
    I want it to know that 'Google' and 'Locate' are commands and what comes after them are parameters (and could be anything). Question: does anyone know how to mix the use of pre-determined grammars (words the speech recognition should recognise) and words not in its pre-determined grammar? Code fragments:

        using System.Speech.Recognition;
        ...
        SpeechRecognizer rec = new SpeechRecognizer();
        rec.SpeechRecognized += rec_SpeechRecognized;

        var c = new Choices();
        c.Add("search");
        var gb = new GrammarBuilder(c);
        var g = new Grammar(gb);
        rec.LoadGrammar(g);
        rec.Enabled = true;
        ...
        void rec_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Text == "search")
            {
                string query = "How can I get a word not defined in Grammar recognised and passed into here!";
                launchGoogle(query);
            }
        }
        ...
        private void launchGoogle(string term)
        {
            Process.Start("IEXPLORE", "google.com?q=" + term);
        }
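    One way to get the command-plus-free-text split described above is to append a dictation segment after the command choices, which GrammarBuilder supports directly. A sketch, assuming the shared SpeechRecognizer as in the fragment above; the semantic key name is made up, and dictation accuracy for arbitrary terms will vary:

        using System;
        using System.Speech.Recognition;

        class CommandPlusDictation
        {
            static void Main()
            {
                var rec = new SpeechRecognizer();

                // "google" or "locate", followed by a free dictation segment.
                var gb = new GrammarBuilder();
                gb.Append(new SemanticResultKey("command", new Choices("google", "locate")));
                gb.AppendDictation();

                rec.LoadGrammar(new Grammar(gb));
                rec.SpeechRecognized += (s, e) =>
                {
                    string command = (string)e.Result.Semantics["command"].Value;
                    // Everything after the first word is the free-text parameter.
                    string term = e.Result.Text.Substring(e.Result.Text.IndexOf(' ') + 1);
                    Console.WriteLine("{0} -> {1}", command, term);
                };
                rec.Enabled = true;

                Console.ReadLine();   // keep the shared recognizer alive while testing
            }
        }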
